
    Efficient Lock-free Binary Search Trees

    In this paper we present a novel algorithm for concurrent lock-free internal binary search trees (BSTs) and implement a Set abstract data type (ADT) based on it. We show that in the presented lock-free BST algorithm the amortized step complexity of each set operation - Add, Remove and Contains - is O(H(n) + c), where H(n) is the height of a BST with n nodes and c is the contention during the execution. Our algorithm adapts its contention measure to the read-write load. In read-heavy situations, operations avoid helping pending concurrent Remove operations during traversal and thereby adapt to interval contention; in write-heavy situations, an operation helps a pending Remove even when it is not obstructed by it, and so adapts to the tighter point contention. The algorithm uses only single-word compare-and-swap (CAS) operations. We show that it has improved disjoint-access parallelism compared to similar existing algorithms, and we prove that it is linearizable. To the best of our knowledge, this is the first algorithm for any concurrent tree data structure in which the modify operations are performed with an additive term of a contention measure.
    Comment: 15 pages, 3 figures, submitted to POD
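    The abstract describes Set operations built from single-word CAS on an internal BST, with traversals that avoid helping under read-heavy load. As a rough, much-simplified Java sketch of that CAS-on-child-links style (not the authors' algorithm: the marking and helping machinery needed for Remove, and the contention adaptation, are omitted), Add and Contains might look like this:

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal sketch, not the paper's algorithm: a BST node whose child links are
// updated with single-word CAS. Remove, marking, and helping are omitted.
final class CasBstSketch {
    static final class Node {
        final int key;
        final AtomicReference<Node> left = new AtomicReference<>();
        final AtomicReference<Node> right = new AtomicReference<>();
        Node(int key) { this.key = key; }
    }

    private final Node root = new Node(Integer.MIN_VALUE); // sentinel root

    // Add: traverse to the insertion point, then publish the new node with one CAS.
    // If the CAS fails, a concurrent update won the race, so retry from the root.
    boolean add(int key) {
        while (true) {
            Node parent = root, cur = root.right.get();
            while (cur != null) {
                if (key == cur.key) return false;            // already present
                parent = cur;
                cur = (key < cur.key) ? cur.left.get() : cur.right.get();
            }
            AtomicReference<Node> link =
                (parent == root || key >= parent.key) ? parent.right : parent.left;
            if (link.compareAndSet(null, new Node(key))) return true;
        }
    }

    // Contains: a read-only traversal that never writes and never helps.
    boolean contains(int key) {
        Node cur = root.right.get();
        while (cur != null) {
            if (key == cur.key) return true;
            cur = (key < cur.key) ? cur.left.get() : cur.right.get();
        }
        return false;
    }
}
```

    In the sketch, an Add that loses a CAS race simply retries, and Contains never writes, mirroring the read-heavy behaviour described above.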

    Asynchronous Optimization Methods for Efficient Training of Deep Neural Networks with Guarantees

    Asynchronous distributed algorithms are a popular way to reduce synchronization costs in large-scale optimization, and in particular for neural network training. However, for nonsmooth and nonconvex objectives, few convergence guarantees exist beyond cases where closed-form proximal operator solutions are available. As most popular contemporary deep neural networks lead to nonsmooth and nonconvex objectives, there is now a pressing need for such convergence guarantees. In this paper, we analyze for the first time the convergence of stochastic asynchronous optimization for this general class of objectives. In particular, we focus on stochastic subgradient methods allowing for block variable partitioning, where the shared-memory-based model is asynchronously updated by concurrent processes. To this end, we first introduce a probabilistic model that captures key features of real asynchronous scheduling between concurrent processes; under this model, we establish convergence with probability one to an invariant set for stochastic subgradient methods with momentum. From a practical perspective, one issue with this family of methods is that it is not efficiently supported by machine learning frameworks, which mostly focus on distributed data-parallel strategies. To address this, we propose a new implementation strategy for shared-memory-based training of deep neural networks, whereby concurrent parameter servers are utilized to train a partitioned but shared model in single- and multi-GPU settings. Based on this implementation, we achieve on average a 1.2x speed-up over state-of-the-art training methods on popular image classification tasks without compromising accuracy.
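    To make the block-partitioned, shared-memory scheme above concrete, here is a toy Java sketch (not the paper's implementation; the quadratic loss, dimensions, and hyperparameters are invented for illustration) in which unsynchronized worker threads apply momentum subgradient updates to sampled blocks of one shared parameter vector:

```java
import java.util.Random;

// Toy sketch of asynchronous block-partitioned subgradient descent with momentum:
// worker threads share one parameter vector and update their sampled block
// without any synchronization between them. Loss and constants are made up.
public class AsyncBlockSgdSketch {
    static final int DIM = 8, BLOCKS = 4, BLOCK = DIM / BLOCKS;
    static final double LR = 0.05, MOMENTUM = 0.9;
    static final double[] params = new double[DIM];     // shared model
    static final double[] velocity = new double[DIM];   // shared momentum buffer

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int w = 0; w < workers.length; w++) {
            workers[w] = new Thread(() -> {
                Random rnd = new Random();
                for (int step = 0; step < 10_000; step++) {
                    int b = rnd.nextInt(BLOCKS) * BLOCK;   // sample one block
                    for (int i = b; i < b + BLOCK; i++) {
                        // stochastic subgradient of a noisy quadratic: (x_i - 1) + noise
                        double g = (params[i] - 1.0) + 0.01 * rnd.nextGaussian();
                        velocity[i] = MOMENTUM * velocity[i] - LR * g;
                        params[i] += velocity[i];          // unsynchronized write
                    }
                }
            });
            workers[w].start();
        }
        for (Thread t : workers) t.join();
        System.out.println("params[0] ~ " + params[0]);   // should approach 1.0
    }
}
```

    Because the workers never lock the parameters, reads and writes can interleave arbitrarily, which is the kind of asynchronous scheduling the paper's probabilistic model is meant to capture.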

    Brief Announcement: Non-Blocking Dynamic Unbounded Graphs with Worst-Case Amortized Bounds

    This paper reports a new concurrent graph data structure that supports updates of both edges and vertices, as well as queries: breadth-first search, single-source shortest paths, and betweenness centrality. The operations are provably linearizable and non-blocking.
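    As a plain illustration of the interface such a structure exposes (this sketch leans on JDK concurrent containers and is not non-blocking, unlike the reported data structure), a dynamic directed graph with vertex/edge insertion and a breadth-first-search query could be written as:

```java
import java.util.*;
import java.util.concurrent.*;

// Rough sketch, not the paper's structure: vertex and edge updates go through
// concurrent maps/sets, and a BFS query reads whatever adjacency state it observes.
public class ConcurrentGraphSketch {
    private final ConcurrentMap<Integer, Set<Integer>> adj = new ConcurrentHashMap<>();

    void addVertex(int v) { adj.putIfAbsent(v, ConcurrentHashMap.newKeySet()); }

    void addEdge(int u, int v) {             // directed edge u -> v
        addVertex(u); addVertex(v);
        adj.get(u).add(v);
    }

    // Breadth-first search from src; returns the vertices reachable in the observed state.
    Set<Integer> bfs(int src) {
        Set<Integer> seen = new HashSet<>();
        Deque<Integer> queue = new ArrayDeque<>();
        if (adj.containsKey(src)) { seen.add(src); queue.add(src); }
        while (!queue.isEmpty()) {
            int u = queue.poll();
            for (int w : adj.getOrDefault(u, Set.of())) {
                if (seen.add(w)) queue.add(w);
            }
        }
        return seen;
    }
}
```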

    Non-Blocking Dynamic Unbounded Graphs with Worst-Case Amortized Bounds


    Learned Lock-free Search Data Structures

    Non-blocking search data structures offer scalability with a progress guarantee on high-performance multi-core architectures. In the recent past, "learned queries" have gained remarkable attention: the rank of a key is predicted by machine learning models trained to infer the cumulative distribution function of an ordered dataset. A line of work has exhibited the superiority of learned queries over classical query algorithms. Yet, to our knowledge, no existing non-blocking search data structure employs them. In this paper, we introduce Kanva, a framework for learned non-blocking search. Kanva has an intuitive yet non-trivial design: traverse down a shallow hierarchy of lightweight linear models to reach the "non-blocking bins", which are dynamic ordered search structures. The proposed approach significantly outperforms the current state of the art - non-blocking interpolation search trees and elimination (a,b)-trees - on many workloads and data distributions. Kanva is provably linearizable.
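    A minimal Java sketch of the learned-routing idea described above (a single fitted linear model in place of Kanva's model hierarchy, and JDK concurrent skip-list sets standing in for the non-blocking bins; the model parameters are assumed to be fitted offline) might look like:

```java
import java.util.concurrent.ConcurrentSkipListSet;

// Minimal sketch of a learned index over dynamic bins, not Kanva itself: one
// linear model approximates the key-rank CDF and routes each key to a bin;
// each bin is an ordered concurrent set. Model hierarchy, the non-blocking bin
// design, and rebuilding are all omitted.
final class LearnedSearchSketch {
    private final ConcurrentSkipListSet<Long>[] bins;
    private final double slope, intercept;    // fitted linear model: rank ~= slope*key + intercept

    @SuppressWarnings("unchecked")
    LearnedSearchSketch(int numBins, double slope, double intercept) {
        this.bins = new ConcurrentSkipListSet[numBins];
        for (int i = 0; i < numBins; i++) bins[i] = new ConcurrentSkipListSet<>();
        this.slope = slope;
        this.intercept = intercept;
    }

    // Route a key to its predicted bin, clamped to a valid index.
    private ConcurrentSkipListSet<Long> binFor(long key) {
        int idx = (int) (slope * key + intercept);
        return bins[Math.max(0, Math.min(bins.length - 1, idx))];
    }

    boolean add(long key)      { return binFor(key).add(key); }
    boolean contains(long key) { return binFor(key).contains(key); }
    boolean remove(long key)   { return binFor(key).remove(key); }
}
```

    For keys spread roughly uniformly over [0, 1,000,000), for instance, new LearnedSearchSketch(16, 16.0 / 1_000_000, 0) routes them evenly across 16 bins; correctness only requires that the model map each key to the same bin every time.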